Jakeescape99:
According to Chat-GPT, only 0.25% of the US population has a BMI of 50 or more. I seriously thought that it was way more than that lol. I'd have to be over 400lbs to hit that BMI, but I think that's going to have to be my goal lol.
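For anyone checking the arithmetic: BMI in US units is 703 × weight (lb) ÷ height (in)². A minimal sketch, assuming a height of 6'3" (75 inches, which isn't stated in the thread and is just an example):

```python
# BMI (US units) = 703 * weight_lb / height_in**2
def weight_for_bmi(target_bmi: float, height_in: float) -> float:
    """Weight in pounds needed to reach target_bmi at the given height."""
    return target_bmi * height_in ** 2 / 703

# Hypothetical example: at 75 inches tall, a BMI of 50 works out to about 400 lb.
print(round(weight_for_bmi(50, 75)))  # 400
```

So "over 400 lbs for a BMI of 50" checks out for someone around 6'3"; shorter people would hit it at a lower weight.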
Munchies:
This is a beautiful example of why you should not use Chat-GPT for research.
The highest BMI category the CDC and the NIH track is 40 or more, which is severe obesity. This is a little over 9% of the US population. There's no tracking for Americans with a BMI of 50 or more.
In other words, Chat-GPT hallucinated this information.
Letters And Numbers:
Chat gpt will sometimes give its sources if you ask. It might not be hallucinated, but it also might not be accurate. Although I don't know how reliable CDC/NIH data is going forward, either. Might be an issue-by-issue thing.
LLMs always hallucinate, even when the responses they give are accurate.
It's not about the results, though those provide hints; it's about the way they function. LLMs work purely on statistics, not on some kind of rule-based system.
They sometimes deliver correct responses, yes, but even a broken clock is right twice a day!
In other words, you can arrive at a correct conclusion through hallucination; there's no reason you can't. It's just that there are far better ways to reach conclusions.
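To make the "purely on statistics" point concrete, here's a toy sketch (my own illustration, not how any real LLM is implemented): a bigram model that predicts the next word only from co-occurrence counts, with no rules and no notion of truth.

```python
import random
from collections import Counter, defaultdict

# Toy "language model": predict the next word purely from bigram statistics.
corpus = "the cat sat on the mat the cat ate the fish".split()
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(word: str) -> str:
    """Sample a continuation in proportion to how often it followed `word`."""
    options = counts[word]
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]

# "the" was followed by cat (twice), mat, and fish in the corpus; the model
# samples proportionally to frequency. It has no way to know which is "true".
print(next_word("the"))
```

A real LLM is vastly more sophisticated, but the underlying move is the same: pick a statistically plausible continuation. Sometimes the plausible continuation is also the correct one, and sometimes (as with the 0.25% figure above) it isn't.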